【URP Shader】Depth of Field

Posted by FlowingCrescent on 2021-08-07

It has been a while since I posted anything technical — for reasons everyone understands, most of what I have worked on since starting my job cannot be shared. With a bit of spare time, I decided to try depth of field (DoF), something I had never implemented before. Catlike Coding happens to have a tutorial on it, but his version targets the built-in pipeline, so I made a URP version instead.


Why Depth of Field Occurs

image.png
Yan Lingqi explains this very clearly in GAMES101: rays emitted from a single point, after being refracted by a convex lens, do not necessarily converge to a single point on the sensor. They often land in a disc instead, and this disc is called the Circle of Confusion (CoC).
Evidently, a point that is either too close to or too far from the lens will produce a CoC.

As in the image, the slightly more distant stamens produce a CoC.
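The similar-triangles geometry behind the CoC can be sketched numerically. This is a toy of the thin-lens model as taught in GAMES101, not code from this post: `aperture` is the lens aperture width, `z_image` the distance behind the lens where a point's rays converge, and `z_sensor` the sensor plane distance — all hypothetical names.

```python
# Toy thin-lens sketch: a light cone of width `aperture` converges at z_image;
# a sensor at z_sensor cuts the cone in a disc of diameter C (the CoC).
# By similar triangles: C / aperture = |z_sensor - z_image| / z_image.
def coc_diameter(aperture, z_sensor, z_image):
    return aperture * abs(z_sensor - z_image) / z_image

print(coc_diameter(2.0, 50.0, 50.0))  # rays converge exactly on the sensor -> 0.0
print(coc_diameter(2.0, 50.0, 40.0))  # rays converge in front of the sensor -> 0.5
```

A point in perfect focus yields a zero-diameter CoC; moving it nearer or farther grows the disc, which is exactly the behavior the shader approximates below.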

Implementation in URP

In Catlike's article, DoF is implemented with five passes:

  1. Estimate the CoC intensity and output it as a CoC buffer
  2. Downsample the CoC buffer with a special filter
  3. Compute the DoF and output it as a DoF buffer
  4. Blur the DoF buffer further
  5. Composite the DoF buffer with the original frame buffer

Walking through these five steps gives an intuitive picture of how a simple DoF effect is built.

CoC Buffer

A buffer with the same width and height as the frame buffer, in RHalf format (a single 16-bit float channel).
image.png
Displaying the CoC buffer directly on screen looks like this — but the black regions actually store negative values. This was the first time I learned that an RT can hold negative values, though it makes sense once you think of the RT as an array of halfs.
image.png

Output when visualizing -coc

The sign is exactly what distinguishes foreground from background.
half frag(v2f i) : SV_Target {
    half4 baseMap = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv);
    half depth = SampleSceneDepth(i.uv);
    depth = LinearEyeDepth(depth, _ZBufferParams);

    float coc = (depth - _FocusDistance) / _FocusRange;
    coc = clamp(coc, -1, 1) * _BokehRadius;

    return coc;
}
The fragment shader is straightforward: subtract the focus distance from the depth, divide by the focus range to scale the ratio, then Clamp(-1, 1) and map into [-_BokehRadius, _BokehRadius]. `_BokehRadius` is the size of the blur aperture in the final effect (the word "bokeh" apparently comes from Japanese).
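The mapping can be checked with a small Python sketch — illustrative values only, with parameter names mirroring the shader properties:

```python
def coc(depth, focus_distance, focus_range, bokeh_radius):
    c = (depth - focus_distance) / focus_range   # signed distance from the focus plane
    c = max(-1.0, min(1.0, c))                   # clamp(coc, -1, 1)
    return c * bokeh_radius                      # map into [-bokeh_radius, bokeh_radius]

print(coc(10.0, 10.0, 3.0, 4.0))   # exactly in focus -> 0.0
print(coc(4.0, 10.0, 3.0, 4.0))    # near foreground  -> -4.0
print(coc(100.0, 10.0, 3.0, 4.0))  # far background   -> 4.0
```

Foreground points end up negative and background points positive, which the later passes rely on.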

Downsampling the CoC Buffer

First, why downsample at all? The bokeh pass later also works on a half-width, half-height RT, so sampling the CoC buffer there triggers bilinear interpolation — and for depth, or for any value derived from depth, bilinear interpolation is meaningless. We therefore need a special downsampling step for the CoC buffer beforehand.

half4 frag(v2f i) : SV_Target {
    //-0.5x, -0.5y, 0.5x, 0.5y
    float4 o = _MainTex_TexelSize.xyxy * float2(-0.5, 0.5).xxyy;

    //4 diagonal samples
    half coc0 = SAMPLE_TEXTURE2D(_CoCTex, sampler_CoCTex, i.uv + o.xy).r;
    half coc1 = SAMPLE_TEXTURE2D(_CoCTex, sampler_CoCTex, i.uv + o.zy).r;
    half coc2 = SAMPLE_TEXTURE2D(_CoCTex, sampler_CoCTex, i.uv + o.xw).r;
    half coc3 = SAMPLE_TEXTURE2D(_CoCTex, sampler_CoCTex, i.uv + o.zw).r;

    half cocMin = min(min(min(coc0, coc1), coc2), coc3);
    half cocMax = max(max(max(coc0, coc1), coc2), coc3);

    //write the value with the largest magnitude into the alpha channel
    half coc = cocMax >= -cocMin ? cocMax : cocMin;

    return half4(SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv).rgb, coc);
}

image.png
Outputting the coc value alone looks like this — visually indistinguishable from the previous image, of course.
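Why the max-magnitude pick matters can be seen with a quick numeric sketch (hypothetical sample values): a plain average lets signed CoC values cancel, wrongly marking a downsampled texel as in focus.

```python
def downsample_coc(c0, c1, c2, c3):
    coc_min = min(c0, c1, c2, c3)
    coc_max = max(c0, c1, c2, c3)
    # keep whichever extreme has the larger magnitude, preserving its sign
    return coc_max if coc_max >= -coc_min else coc_min

samples = (-4.0, 3.0, 0.5, 0.5)     # strong foreground next to strong background
print(sum(samples) / 4)             # bilinear-style average -> 0.0 (blur wrongly vanishes)
print(downsample_coc(*samples))     # max magnitude -> -4.0 (foreground blur kept)
```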

Computing the Bokeh

This is the key pass: it not only performs the bokeh sampling but also blends foreground against background.
This part is easiest to understand by reading the code and its comments:

// From https://github.com/Unity-Technologies/PostProcessing/
// blob/v2/PostProcessing/Shaders/Builtins/DiskKernels.hlsl
//predefined sample positions
#if defined(BOKEH_KERNEL_SMALL)
static const int kernelSampleCount = 16;
static const float2 kernel[kernelSampleCount] = {
float2(0, 0),
float2(0.54545456, 0),
float2(0.16855472, 0.5187581),
float2(-0.44128203, 0.3206101),
float2(-0.44128197, -0.3206102),
float2(0.1685548, -0.5187581),
float2(1, 0),
float2(0.809017, 0.58778524),
float2(0.30901697, 0.95105654),
float2(-0.30901703, 0.9510565),
float2(-0.80901706, 0.5877852),
float2(-1, 0),
float2(-0.80901694, -0.58778536),
float2(-0.30901664, -0.9510566),
float2(0.30901712, -0.9510565),
float2(0.80901694, -0.5877853),
};
#elif defined (BOKEH_KERNEL_MEDIUM)
static const int kernelSampleCount = 22;
static const float2 kernel[kernelSampleCount] = {
float2(0, 0),
float2(0.53333336, 0),
float2(0.3325279, 0.4169768),
float2(-0.11867785, 0.5199616),
float2(-0.48051673, 0.2314047),
float2(-0.48051673, -0.23140468),
float2(-0.11867763, -0.51996166),
float2(0.33252785, -0.4169769),
float2(1, 0),
float2(0.90096885, 0.43388376),
float2(0.6234898, 0.7818315),
float2(0.22252098, 0.9749279),
float2(-0.22252095, 0.9749279),
float2(-0.62349, 0.7818314),
float2(-0.90096885, 0.43388382),
float2(-1, 0),
float2(-0.90096885, -0.43388376),
float2(-0.6234896, -0.7818316),
float2(-0.22252055, -0.974928),
float2(0.2225215, -0.9749278),
float2(0.6234897, -0.7818316),
float2(0.90096885, -0.43388376),
};
#endif

//subtract the tap radius from the CoC value so the aperture softens toward its edge
half Weigh (half coc, half radius) {
    return saturate((coc - radius + 2) / 2);
}


half4 frag(v2f i) : SV_Target {
    //CoC value at the current position
    half coc = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv).a;

    half3 bgColor = 0, fgColor = 0;
    half bgWeight = 0, fgWeight = 0;
    //loop over the predefined sample positions
    for (int k = 0; k < kernelSampleCount; k++) {
        //intended sample offset (not yet in texel units)
        float2 o = kernel[k] * _BokehRadius;
        //distance of the tap from the center (not yet in texel units)
        half radius = length(o);
        //convert the offset into texel units
        o *= _MainTex_TexelSize.xy;
        //sample
        half4 s = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv + o);

        //when the sampled point belongs to the background:
        //take the minimum of the current CoC and the sampled CoC, which prevents
        //the background from incorrectly bleeding into the foreground;
        //Weigh computes the weight (i.e. whether this tap's color is actually used) —
        //the larger the CoC, the larger the weight
        half bgw = Weigh(max(0, min(s.a, coc)), radius);
        bgColor += s.rgb * bgw;
        bgWeight += bgw;

        //the foreground case is simpler: compute the weight directly,
        //since the foreground bleeding over the background is fine
        half fgw = Weigh(-s.a, radius);
        fgColor += s.rgb * fgw;
        fgWeight += fgw;
    }
    //normalize the colors back to [0, 1]
    bgColor *= 1 / (bgWeight + (bgWeight == 0));
    fgColor *= 1 / (fgWeight + (fgWeight == 0));

    //use PI to tune the foreground/background blend
    half bgfg = saturate(fgWeight * 3.14159265359 / kernelSampleCount);
    half3 color = lerp(bgColor, fgColor, bgfg);

    return float4(color, bgfg);
}
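Two details of the arithmetic above are easy to verify in isolation (a Python sketch with made-up inputs): Weigh gives a tap full weight only when the sample's CoC disc reaches out past the tap's radius, with a 2-pixel soft edge, and the `(w == 0)` term keeps the normalization from dividing by zero.

```python
def weigh(coc, radius):
    # saturate((coc - radius + 2) / 2) from the shader
    return max(0.0, min(1.0, (coc - radius + 2) / 2))

print(weigh(4.0, 1.0))  # disc covers the tap       -> 1.0
print(weigh(1.0, 4.0))  # disc far smaller than tap -> 0.0

bg_weight, bg_color = 0.0, 0.0
# bgColor *= 1 / (bgWeight + (bgWeight == 0)): the comparison adds 1 only
# when the accumulated weight is exactly zero, keeping the division finite.
bg_color *= 1 / (bg_weight + (bg_weight == 0))
print(bg_color)  # -> 0.0 rather than a NaN
```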

Outputs of a few intermediate values:
image.png

bgColor (background)

image.png

fgColor (foreground)

image.png

bgfg (blend weight)

image.png

color (blended result)

Further Blurring the DoF Buffer

The code here is almost identical to the second pass; it just replaces the max-magnitude pick with a plain average. The result is simply a bit more blur, so I will skip the screenshot.

half4 frag(v2f i) : SV_Target {
    half4 baseMap = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv);

    float4 o = _MainTex_TexelSize.xyxy * float2(-0.5, 0.5).xxyy;
    half4 s =
        SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv + o.xy) +
        SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv + o.zy) +
        SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv + o.xw) +
        SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv + o.zw);
    return s * 0.25;
}

Compositing the DoF Buffer with the Original FrameBuffer

half4 frag(v2f i) : SV_Target {
    half4 source = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv);
    half coc = SAMPLE_TEXTURE2D(_CoCTex, sampler_CoCTex, i.uv).r;
    half4 dof = SAMPLE_TEXTURE2D(_DofTex1, sampler_DofTex1, i.uv);

    half dofStrength = smoothstep(0.1, 1, coc);

    //dof.a is bgfg, i.e. how much of the pixel is foreground,
    //while dofStrength covers the background
    half3 color = lerp(source.rgb, dof.rgb, dofStrength + dof.a - dofStrength * dof.a);

    return float4(color, source.a);
}

This interpolation is curious — where does the x + y - xy form come from?
Catlike explains it as follows:
image.png

In short, when you interpolate twice toward the same target (once for the foreground, once for the background), with the first factor written as x and the second as y, the two lerps collapse into a single lerp with factor x + y - xy.
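The identity is easy to confirm numerically (scalar stand-ins for the colors; the values are arbitrary):

```python
def lerp(a, b, t):
    return a + (b - a) * t

s, d = 0.2, 0.9   # source color and dof color
x, y = 0.3, 0.6   # dofStrength and dof.a

two_steps = lerp(lerp(s, d, x), d, y)     # blend for background, then foreground
one_step  = lerp(s, d, x + y - x * y)     # single combined blend
print(abs(two_steps - one_step) < 1e-12)  # -> True
```

Algebraically: lerp(lerp(s, d, x), d, y) = s·(1-x)(1-y) + d·(x + y - xy), and (1-x)(1-y) = 1 - (x + y - xy), so the two forms are identical.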

Catlike later changed
half dofStrength = smoothstep(0.1, 1, coc);
to
half dofStrength = smoothstep(0.1, 1, abs(coc));
noting that without strengthening the foreground a little, some visual artifacts still show up.

image.png

And with that, our depth of field effect is complete.


Summary

Depth of field is one of the more complex post-processing effects (the fact that it needs five separate passes says as much). GDC has plenty of talks on DoF optimizations and tricks, but having never implemented DoF before, I lacked the foundation to follow them closely — something to study when I have time.
The notable takeaways this time:

  1. An RT can store negative values
  2. Downsampling depth-derived data needs special handling
  3. Two consecutive lerps can be mathematically collapsed into one

Render Feature Code & Shader

RenderFeature:

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class myDepthOfField : ScriptableRendererFeature
{
[System.Serializable]
public class Setting
{
public RenderPassEvent passEvent = RenderPassEvent.AfterRenderingTransparents;
public Material DOFMat;
[Range(0.1f, 10000f)]
public float focusDistance = 10f;
[Range(0.1f, 10000f)]
public float focusRange = 3f;
[Range(1f, 10f)]
public float bokehRadius = 4f;
public int circleOfConfusionPass = 0;
public int preFilterPass = 1;
public int bokehPass = 2;
public int postFilterPass = 3;
public int combinePass = 4;
}
public Setting setting = new Setting();
class CustomRenderPass : ScriptableRenderPass
{
public int cocID = 0;
public int dof1 = 0;
public int dof2 = 0;
public int tempRT = 0;
public Setting setting;
public RenderTargetIdentifier source;
public CustomRenderPass(Setting set)
{
setting = set;
}

public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
{
cocID = Shader.PropertyToID("_CoCTex");
dof1 = Shader.PropertyToID("_DofTex1");
dof2 = Shader.PropertyToID("_DofTex2");
tempRT = Shader.PropertyToID("TemporaryTexture");
setting.DOFMat.SetFloat("_FocusDistance", setting.focusDistance);
setting.DOFMat.SetFloat("_FocusRange", setting.focusRange);
setting.DOFMat.SetFloat("_BokehRadius", setting.bokehRadius);

}


public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
{
CommandBuffer cmd = CommandBufferPool.Get("DepthOfField");
RenderTextureDescriptor opaqueDesc = renderingData.cameraData.cameraTargetDescriptor;
int width = opaqueDesc.width;
int height = opaqueDesc.height;

cmd.GetTemporaryRT(cocID, width, height, 0, FilterMode.Bilinear, RenderTextureFormat.RHalf, RenderTextureReadWrite.Linear);

//Pass 1, CoC
cmd.Blit(source, cocID, setting.DOFMat, setting.circleOfConfusionPass);


cmd.GetTemporaryRT(dof1, width / 2, height / 2, 0, FilterMode.Bilinear, opaqueDesc.colorFormat);
cmd.GetTemporaryRT(dof2, width / 2, height / 2, 0, FilterMode.Bilinear, opaqueDesc.colorFormat);

//Pass 2, Pre Filter
cmd.Blit(source, dof1, setting.DOFMat, setting.preFilterPass);
//cmd.Blit(dof1, source);

//Pass 3, Bokeh
cmd.Blit(dof1, dof2, setting.DOFMat, setting.bokehPass);
//cmd.Blit(dof2, source);

//Pass 4, Post Filter
cmd.Blit(dof2, dof1, setting.DOFMat, setting.postFilterPass);
//cmd.Blit(dof1, source);

cmd.GetTemporaryRT(tempRT, width, height, 0, FilterMode.Bilinear, opaqueDesc.colorFormat);

//Pass 5, Combine
cmd.Blit(source, tempRT, setting.DOFMat, setting.combinePass);
cmd.Blit(tempRT, source);


cmd.ReleaseTemporaryRT(cocID);
cmd.ReleaseTemporaryRT(dof1);
cmd.ReleaseTemporaryRT(dof2);
cmd.ReleaseTemporaryRT(tempRT);

context.ExecuteCommandBuffer(cmd);
CommandBufferPool.Release(cmd);
}

/// Cleanup any allocated resources that were created during the execution of this render pass.
public override void FrameCleanup(CommandBuffer cmd)
{

}




}

CustomRenderPass m_ScriptablePass;

public override void Create()
{
m_ScriptablePass = new CustomRenderPass(setting);

// Configures where the render pass should be injected.
m_ScriptablePass.renderPassEvent = setting.passEvent;
}

// Here you can inject one or multiple render passes in the renderer.
// This method is called when setting up the renderer once per-camera.
public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
{
m_ScriptablePass.source = renderer.cameraColorTarget;
if (setting.DOFMat != null)
renderer.EnqueuePass(m_ScriptablePass);
}
}



Shader:

Shader "Custom/myDoF" {
Properties {
_MainTex ("Example Texture", 2D) = "white" {}
}
SubShader {
Tags { "RenderType"="Opaque" "RenderPipeline"="UniversalRenderPipeline" }

HLSLINCLUDE
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareDepthTexture.hlsl"

CBUFFER_START(UnityPerMaterial)
float4 _MainTex_ST, _MainTex_TexelSize;
float _FocusDistance, _FocusRange, _BokehRadius;
CBUFFER_END
ENDHLSL

Pass {
Name "CircleOfConfusionPass"
Tags { "LightMode"="UniversalForward" }

HLSLPROGRAM
#pragma vertex vert
#pragma fragment frag

struct a2v {
float4 positionOS : POSITION;
float2 uv : TEXCOORD0;
};

struct v2f {
float4 positionCS : SV_POSITION;
float2 uv : TEXCOORD0;
};

TEXTURE2D(_MainTex);
SAMPLER(sampler_MainTex);

v2f vert(a2v v) {
v2f o;

o.positionCS = TransformObjectToHClip(v.positionOS.xyz);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
return o;
}

half frag(v2f i) : SV_Target {
half4 baseMap = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv);
half depth = SampleSceneDepth(i.uv);
depth = LinearEyeDepth(depth, _ZBufferParams);

float coc = (depth - _FocusDistance) / _FocusRange;
coc = clamp(coc, -1, 1) * _BokehRadius;

return coc;
}
ENDHLSL
}

Pass {
Name "preFilterPass"
Tags { "LightMode"="UniversalForward" }

HLSLPROGRAM
#pragma vertex vert
#pragma fragment frag

struct a2v {
float4 positionOS : POSITION;
float2 uv : TEXCOORD0;
};

struct v2f {
float4 positionCS : SV_POSITION;
float2 uv : TEXCOORD0;
};

TEXTURE2D(_MainTex);
SAMPLER(sampler_MainTex);
TEXTURE2D(_CoCTex);
SAMPLER(sampler_CoCTex);

v2f vert(a2v v) {
v2f o;

o.positionCS = TransformObjectToHClip(v.positionOS.xyz);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
return o;
}

half4 frag(v2f i) : SV_Target {
//-0.5x, -0.5y, 0.5x, 0.5y
float4 o = _MainTex_TexelSize.xyxy * float2(-0.5, 0.5).xxyy;

//4 diagonal samples
half coc0 = SAMPLE_TEXTURE2D(_CoCTex, sampler_CoCTex, i.uv + o.xy).r;
half coc1 = SAMPLE_TEXTURE2D(_CoCTex, sampler_CoCTex, i.uv + o.zy).r;
half coc2 = SAMPLE_TEXTURE2D(_CoCTex, sampler_CoCTex, i.uv + o.xw).r;
half coc3 = SAMPLE_TEXTURE2D(_CoCTex, sampler_CoCTex, i.uv + o.zw).r;

half cocMin = min(min(min(coc0, coc1), coc2), coc3);
half cocMax = max(max(max(coc0, coc1), coc2), coc3);

//write the value with the largest magnitude into the alpha channel
half coc = cocMax >= -cocMin ? cocMax : cocMin;

return half4(SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv).rgb, coc);
}
ENDHLSL
}



Pass {
Name "bokehPass"
Tags { "LightMode"="UniversalForward" }

HLSLPROGRAM
#pragma vertex vert
#pragma fragment frag
#define BOKEH_KERNEL_MEDIUM

struct a2v {
float4 positionOS : POSITION;
float2 uv : TEXCOORD0;
};

struct v2f {
float4 positionCS : SV_POSITION;
float2 uv : TEXCOORD0;
};

TEXTURE2D(_MainTex);
SAMPLER(sampler_MainTex);


v2f vert(a2v v) {
v2f o;

o.positionCS = TransformObjectToHClip(v.positionOS.xyz);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
return o;
}

// From https://github.com/Unity-Technologies/PostProcessing/
// blob/v2/PostProcessing/Shaders/Builtins/DiskKernels.hlsl
#if defined(BOKEH_KERNEL_SMALL)
static const int kernelSampleCount = 16;
static const float2 kernel[kernelSampleCount] = {
float2(0, 0),
float2(0.54545456, 0),
float2(0.16855472, 0.5187581),
float2(-0.44128203, 0.3206101),
float2(-0.44128197, -0.3206102),
float2(0.1685548, -0.5187581),
float2(1, 0),
float2(0.809017, 0.58778524),
float2(0.30901697, 0.95105654),
float2(-0.30901703, 0.9510565),
float2(-0.80901706, 0.5877852),
float2(-1, 0),
float2(-0.80901694, -0.58778536),
float2(-0.30901664, -0.9510566),
float2(0.30901712, -0.9510565),
float2(0.80901694, -0.5877853),
};
#elif defined (BOKEH_KERNEL_MEDIUM)
static const int kernelSampleCount = 22;
static const float2 kernel[kernelSampleCount] = {
float2(0, 0),
float2(0.53333336, 0),
float2(0.3325279, 0.4169768),
float2(-0.11867785, 0.5199616),
float2(-0.48051673, 0.2314047),
float2(-0.48051673, -0.23140468),
float2(-0.11867763, -0.51996166),
float2(0.33252785, -0.4169769),
float2(1, 0),
float2(0.90096885, 0.43388376),
float2(0.6234898, 0.7818315),
float2(0.22252098, 0.9749279),
float2(-0.22252095, 0.9749279),
float2(-0.62349, 0.7818314),
float2(-0.90096885, 0.43388382),
float2(-1, 0),
float2(-0.90096885, -0.43388376),
float2(-0.6234896, -0.7818316),
float2(-0.22252055, -0.974928),
float2(0.2225215, -0.9749278),
float2(0.6234897, -0.7818316),
float2(0.90096885, -0.43388376),
};
#endif

//subtract the tap radius from the CoC value so the aperture softens toward its edge
half Weigh (half coc, half radius) {
return saturate((coc - radius + 2) / 2);
}


half4 frag(v2f i) : SV_Target {
//CoC value at the current position
half coc = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex,i.uv).a;

half3 bgColor = 0, fgColor = 0;
half bgWeight = 0, fgWeight = 0;
//loop over the predefined sample positions
for (int k = 0; k < kernelSampleCount; k++) {
//intended sample offset (not yet in texel units)
float2 o = kernel[k] * _BokehRadius;
//distance of the tap from the center (not yet in texel units)
half radius = length(o);
//convert the offset into texel units
o *= _MainTex_TexelSize.xy;
//sample
half4 s = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv + o);

//when the sampled point belongs to the background:
//take the minimum of the current CoC and the sampled CoC to prevent
//the background from bleeding into the foreground;
//Weigh computes the weight — the larger the CoC, the larger the weight
half bgw = Weigh(max(0, min(s.a, coc)), radius);
bgColor += s.rgb * bgw;
bgWeight += bgw;

//the foreground case is simpler: compute the weight directly,
//since foreground bleeding over the background is fine
half fgw = Weigh(-s.a, radius);
fgColor += s.rgb * fgw;
fgWeight += fgw;
}
//normalize the colors back to [0, 1]
bgColor *= 1 / (bgWeight + (bgWeight == 0));
fgColor *= 1 / (fgWeight + (fgWeight == 0));

//use PI to tune the foreground/background blend
half bgfg = saturate(fgWeight * 3.14159265359 / kernelSampleCount);
half3 color = lerp(bgColor, fgColor, bgfg);

return float4(color, bgfg);
}
ENDHLSL
}
Pass {
Name "postFilterPass"
Tags { "LightMode"="UniversalForward" }

HLSLPROGRAM
#pragma vertex vert
#pragma fragment frag

struct a2v {
float4 positionOS : POSITION;
float2 uv : TEXCOORD0;
};

struct v2f {
float4 positionCS : SV_POSITION;
float2 uv : TEXCOORD0;
};

TEXTURE2D(_MainTex);
SAMPLER(sampler_MainTex);

v2f vert(a2v v) {
v2f o;

o.positionCS = TransformObjectToHClip(v.positionOS.xyz);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
return o;
}

half4 frag(v2f i) : SV_Target {
half4 baseMap = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv);

float4 o = _MainTex_TexelSize.xyxy * float2(-0.5, 0.5).xxyy;
half4 s =
SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv + o.xy) +
SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv + o.zy) +
SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv + o.xw) +
SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv + o.zw);
return s * 0.25;
}
ENDHLSL
}


Pass {
Name "Combine"
Tags { "LightMode"="UniversalForward" }

HLSLPROGRAM
#pragma vertex vert
#pragma fragment frag

struct a2v {
float4 positionOS : POSITION;
float2 uv : TEXCOORD0;
};

struct v2f {
float4 positionCS : SV_POSITION;
float2 uv : TEXCOORD0;
};

TEXTURE2D(_MainTex);
SAMPLER(sampler_MainTex);
TEXTURE2D(_DofTex1);
SAMPLER(sampler_DofTex1);
TEXTURE2D(_CoCTex);
SAMPLER(sampler_CoCTex);


v2f vert(a2v v) {
v2f o;

o.positionCS = TransformObjectToHClip(v.positionOS.xyz);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
return o;
}

half4 frag(v2f i) : SV_Target {
half4 source = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv);
half coc = SAMPLE_TEXTURE2D(_CoCTex, sampler_CoCTex, i.uv).r;
half4 dof = SAMPLE_TEXTURE2D(_DofTex1, sampler_DofTex1, i.uv);

half dofStrength = smoothstep(0.1, 1, abs(coc));
half3 color = lerp(source.rgb, dof.rgb, dofStrength + dof.a - dofStrength * dof.a);

return float4(color, source.a);
}
ENDHLSL
}
}
}

Thank you for reading. If you found this article helpful, feel free to share it with others; if you believe it infringes on your rights, please contact the author to have it removed.